Tackling Toxicity Online: Developing Comprehensive Approaches for Online Risk Detection

 

As we spend more time online, it has become clear that serious harms such as cyberbullying, harassment, and hate speech are all too common on social media platforms. Previous research in this space has predominantly focused on isolated aspects of online risk detection and often lacks a complete view of the problem. Because online spaces are continuously evolving, it is crucial that risk detection models and systems can effectively address real-world challenges. There is a need to collect datasets that mirror the complexity and diversity of the online environment, encompassing various platforms along with their multimodal nature, subtle nuances, contextual information, and evolving trends. In light of these changing dynamics, my research takes a comprehensive approach to detecting and reducing abusive behavior online.

 

Collection of Ecologically Valid Datasets

In my research, I place particular emphasis on collecting ecologically valid datasets that support accurate predictions in real-world scenarios. For example, my studies on youth risks on Instagram (CHI'22) and on sexual risk detection (CSCW'23) highlight the importance of capturing victims' genuine experiences. I integrate annotations provided by the youth themselves, allowing their lived experiences to shape the dataset, and I examine both the public and private aspects of youth's online lives. This emphasis on authentic data collection contributes to reliable risk detection solutions.
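
To make the data model concrete, here is a minimal sketch of what one record in such a self-annotated dataset might look like. The schema and field names are my own illustrative assumptions, not the actual format of the published datasets:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical record schema (field names are illustrative assumptions):
# the risk label comes from the youth participant who lived the experience,
# not from a third-party annotator, and private conversations are included
# alongside public ones.
@dataclass
class AnnotatedConversation:
    conversation_id: str
    platform: str                                 # e.g., "instagram"
    is_private: bool                              # direct messages as well as public posts
    messages: list = field(default_factory=list)  # dicts: sender, timestamp, media flag
    self_label: str = "safe"                      # annotation by the youth: "safe" / "unsafe"
    risk_type: Optional[str] = None               # e.g., "harassment", "sexual_risk"

record = AnnotatedConversation(
    conversation_id="c-001",
    platform="instagram",
    is_private=True,
    self_label="unsafe",
    risk_type="harassment",
)
```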

 

Multi-modal Feature Extraction

Previous studies have predominantly focused on analyzing textual content using natural language processing (NLP) techniques. However, my research published at CSCW'23 reveals a notable shift toward images and videos, particularly among younger users. In addition, contextual and human-centered features can make risk detection more tailored and effective. This is increasingly important given the growing adoption of end-to-end encryption (E2EE) on Meta's platforms, where message content is inaccessible and linguistic and semantic features alone are therefore insufficient. As such, my research explores the multi-modal nature of online conversations and detects risks by harnessing meta-level data inferred from conversational patterns.
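
As an illustration of what content-agnostic, meta-level features can look like, the sketch below computes conversation-level signals from metadata alone. The input format and feature set are assumptions for illustration, not the exact features used in the published work:

```python
from statistics import mean

# Meta-level features that remain available under E2EE: only timestamps,
# sender IDs, and media flags are used, never the message text itself.
def meta_features(messages):
    """messages: list of dicts with 'sender', 'timestamp' (seconds), 'has_media'."""
    timestamps = sorted(m["timestamp"] for m in messages)
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])] or [0.0]
    senders = [m["sender"] for m in messages]
    return {
        "n_messages": len(messages),
        "n_participants": len(set(senders)),
        "media_ratio": sum(m["has_media"] for m in messages) / len(messages),
        "mean_gap_sec": mean(gaps),
        "burstiness": sum(g < 5 for g in gaps) / len(gaps),  # share of rapid-fire replies
        "imbalance": max(senders.count(s) for s in set(senders)) / len(messages),
    }
```

Features like reply latency, burstiness, and sender imbalance can then feed a downstream classifier without the platform ever decrypting the conversation.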

 

Proactive Risk Detection

My research aims to shift the paradigm of automated risk moderation from reactive, one-size-fits-all measures to proactive, content-aware solutions. In this line of work, I identified a significant gap in the hate speech detection literature: existing systems rely heavily on outdated lexicons and fail to keep pace with evolving language. I therefore use word embeddings in a hybrid approach (lexicon-based matching combined with unsupervised learning) to perform adaptive risk detection.
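
A minimal sketch of the lexicon-plus-embeddings idea: seed terms are expanded with their embedding-space neighbors so the lexicon adapts to emerging language. The pretrained vectors, seed terms, and thresholds here are assumptions, and the unsupervised learning component of the published approach is not shown:

```python
import gensim.downloader as api

# Pretrained word vectors; embedding neighborhoods surface emerging variants
# that a static, hand-curated lexicon would miss.
vectors = api.load("glove-wiki-gigaword-100")

def expand_lexicon(seed_terms, topn=10, threshold=0.6):
    """Grow a seed lexicon with embedding-space neighbors above a similarity cutoff."""
    expanded = set(seed_terms)
    for term in seed_terms:
        if term not in vectors:
            continue
        for neighbor, score in vectors.most_similar(term, topn=topn):
            if score >= threshold:
                expanded.add(neighbor)
    return expanded

def flag_message(tokens, lexicon):
    """Lexicon-based pass: flag a message if any token hits the expanded lexicon."""
    return any(t.lower() in lexicon for t in tokens)
```

Periodically re-running the expansion against fresh embeddings is what makes the detection adaptive rather than frozen at lexicon-creation time.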

 

Cross-platform Analysis

Attackers often coordinate on one platform and carry out attacks on another. However, most research in this space has looked at risky behavior in a siloed fashion, evaluating the effects of moderation actions only on the platform from which the accounts were banned. Users are not bound to a single platform and can migrate to other online services where moderation may be laxer. In fact, my research published at WebSci'21 shows that once hateful users are banned from Twitter, they often move to Gab, an alternative social network with openly minimal moderation that markets itself as a defender of "free speech". This has important implications for risk detection systems used to ban users and for the repercussions of active moderation.
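
As a toy illustration of cross-platform linking (not the WebSci'21 paper's actual methodology), one can normalize handles and intersect the set of banned accounts with accounts observed on the destination platform; all handles below are hypothetical:

```python
# Link accounts across platforms via normalized handle matching, to trace
# whether users banned on one platform reappear on another.
def normalize(handle: str) -> str:
    """Lowercase and strip separators so 'John_Doe' and 'johndoe' align."""
    return "".join(ch for ch in handle.lower() if ch.isalnum())

def migrated_users(banned_twitter: set, gab_handles: set) -> set:
    """Return normalized handles banned on Twitter that also appear on Gab."""
    gab_norm = {normalize(h) for h in gab_handles}
    return {normalize(h) for h in banned_twitter} & gab_norm

print(migrated_users({"Hate_Acct1", "troll.user"}, {"hateacct1", "someoneelse"}))
# -> {'hateacct1'}
```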